CNN on the CIFAR100 Data

Biostat 203B

Author

Dr. Hua Zhou @ UCLA

Published

February 3, 2026

1 Setup

Display system information for reproducibility.

import IPython
print(IPython.sys_info())
{'commit_hash': '49914f938',
 'commit_source': 'installation',
 'default_encoding': 'utf-8',
 'ipython_path': '/Users/huazhou/Library/Python/3.9/lib/python/site-packages/IPython',
 'ipython_version': '8.18.1',
 'os_name': 'posix',
 'platform': 'macOS-26.2-arm64-arm-64bit',
 'sys_executable': '/Library/Developer/CommandLineTools/usr/bin/python3',
 'sys_platform': 'darwin',
 'sys_version': '3.9.6 (default, Dec  2 2025, 07:27:58) \n'
                '[Clang 17.0.0 (clang-1700.6.3.2)]'}
sessionInfo()
R version 4.5.2 (2025-10-31)
Platform: aarch64-apple-darwin20
Running under: macOS Tahoe 26.2

Matrix products: default
BLAS:   /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib 
LAPACK: /Library/Frameworks/R.framework/Versions/4.5-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.12.1

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

time zone: America/Los_Angeles
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

loaded via a namespace (and not attached):
 [1] digest_0.6.39     fastmap_1.2.0     xfun_0.55         Matrix_1.7-4     
 [5] lattice_0.22-7    reticulate_1.44.1 knitr_1.51        htmltools_0.5.9  
 [9] png_0.1-8         rmarkdown_2.30    cli_3.6.5         grid_4.5.2       
[13] compiler_4.5.2    rprojroot_2.1.1   here_1.0.2        rstudioapi_0.17.1
[17] tools_4.5.2       evaluate_1.0.5    Rcpp_1.1.1        yaml_2.3.12      
[21] otel_0.2.0        rlang_1.1.7       jsonlite_2.0.0    htmlwidgets_1.6.4

Load some libraries.

# Load the pandas library
import pandas as pd
# Load numpy for array manipulation
import numpy as np
# Load seaborn plotting library
import seaborn as sns
import matplotlib.pyplot as plt

# Set font sizes in plots
sns.set(font_scale = 1.2)
# Display all columns
pd.set_option('display.max_columns', None)

# Load Tensorflow and Keras
import tensorflow as tf
/Users/huazhou/Library/Python/3.9/lib/python/site-packages/urllib3/__init__.py:34: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
  warnings.warn(
from tensorflow import keras
from tensorflow.keras import layers
library(keras)
The keras package is deprecated. Use the keras3 package instead.
library(jpeg)

In this example, we train a CNN (convolutional neural network) on the CIFAR100 data set, achieving a testing accuracy of about 44% after 30 epochs. A random guess would have an accuracy of about 1%.

  • The CIFAR100 database is a large collection of \(32 \times 32\) color images that is commonly used for training and testing machine learning algorithms; it ships with both 100 fine class labels and 20 coarse superclass labels (see the sketch after this list).

  • 50,000 training images, 10,000 testing images.
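
This example uses the 100 fine class labels. A minimal sketch (Python) of selecting between the two label sets with the label_mode argument of load_data:

# label_mode = "fine" (the default) returns the 100 fine class labels;
# label_mode = "coarse" returns the 20 superclass labels instead
from tensorflow import keras

(_, y_fine), _ = keras.datasets.cifar100.load_data(label_mode = "fine")
(_, y_coarse), _ = keras.datasets.cifar100.load_data(label_mode = "coarse")
print(y_fine[:5].ravel())    # values in 0..99
print(y_coarse[:5].ravel())  # values in 0..19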

2 Prepare data

Acquire data:

# Load the data and split it between train and test sets
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()
# Training set
x_train.shape
(50000, 32, 32, 3)
y_train.shape
(50000, 1)
# Test set
x_test.shape
(10000, 32, 32, 3)
y_test.shape
(10000, 1)
cifar100 <- dataset_cifar100()
x_train <- cifar100$train$x
y_train <- cifar100$train$y
x_test <- cifar100$test$x
y_test <- cifar100$test$y

Training set:

dim(x_train)
[1] 50000    32    32     3
dim(y_train)
[1] 50000     1

Testing set:

dim(x_test)
[1] 10000    32    32     3
dim(y_test)
[1] 10000     1

For a CNN, we keep the \(32 \times 32 \times 3\) tensor structure of each image instead of vectorizing it into a long vector.

# Rescale
x_train = x_train / 255
x_test = x_test / 255
# Train
x_train.shape
(50000, 32, 32, 3)
# Test
x_test.shape
(10000, 32, 32, 3)
# rescale
x_train <- x_train / 255
x_test <- x_test / 255
dim(x_train)
[1] 50000    32    32     3
dim(x_test)
[1] 10000    32    32     3
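
Pixel intensities are stored as 8-bit integers in 0-255, so dividing by 255 maps them into \([0, 1]\). A quick sanity check (Python):

# Confirm the rescaled pixel range
print(x_train.min(), x_train.max())  # expect 0.0 1.0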

Encode \(y\) as a binary class matrix (one-hot encoding):

y_train = keras.utils.to_categorical(y_train, 100)
y_test = keras.utils.to_categorical(y_test, 100)
# Train
y_train.shape
(50000, 100)
# Test
y_test.shape
(10000, 100)
# First train instance
y_train[0]
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
      dtype=float32)
y_train <- to_categorical(y_train, 100)
y_test <- to_categorical(y_test, 100)
dim(y_train)
[1] 50000   100
dim(y_test)
[1] 10000   100
# head(y_train)
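
One-hot encoding is invertible: the integer class label is just the index of the 1. A quick check (Python) against the first training instance printed above:

# Recover integer class labels from the one-hot matrix
labels = np.argmax(y_train, axis = 1)
print(labels[0])  # 19, the position of the 1 in y_train[0] above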

Show a few images:

import matplotlib.pyplot as plt

# Feature: 32x32 color image
# Display the first 25 training images in a 5 x 5 grid
fig, axes = plt.subplots(5, 5, figsize = (8, 8))
for i, ax in enumerate(axes.flat):
  ax.imshow(x_train[i])
  ax.axis('off')
plt.show()

par(mar = c(0, 0, 0, 0), mfrow = c(5, 5))
index <- sample(seq(50000), 25)
for (i in index) plot(as.raster(x_train[i,,, ]))

3 Define the model

Define a sequential model (a linear stack of layers) with four convolutional layers (32, 64, 128, and 256 filters, each followed by \(2 \times 2\) max pooling), a flatten layer with 50% dropout, a fully-connected hidden layer with 512 neurons, and a 100-unit softmax output layer:

model = keras.Sequential(
  [
    keras.Input(shape = (32, 32, 3)),
    layers.Conv2D(
      filters = 32, 
      kernel_size = (3, 3),
      padding = 'same',
      activation = 'relu',
      # input_shape = (32, 32, 3)
      ),
    layers.MaxPooling2D(pool_size = (2, 2)),
    layers.Conv2D(
      filters = 64, 
      kernel_size = (3, 3),
      padding = 'same',
      activation = 'relu'
      ),
    layers.MaxPooling2D(pool_size = (2, 2)),
    layers.Conv2D(
      filters = 128, 
      kernel_size = (3, 3),
      padding = 'same',
      activation = 'relu'
      ),
    layers.MaxPooling2D(pool_size = (2, 2)),
    layers.Conv2D(
      filters = 256, 
      kernel_size = (3, 3),
      padding = 'same',
      activation = 'relu'
      ),
    layers.MaxPooling2D(pool_size = (2, 2)),
    layers.Flatten(),
    layers.Dropout(rate = 0.5),
    layers.Dense(units = 512, activation = 'relu'),
    layers.Dense(units = 100, activation = 'softmax')
]
)

model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 32, 32, 32)        896       
                                                                 
 max_pooling2d (MaxPooling2  (None, 16, 16, 32)        0         
 D)                                                              
                                                                 
 conv2d_1 (Conv2D)           (None, 16, 16, 64)        18496     
                                                                 
 max_pooling2d_1 (MaxPoolin  (None, 8, 8, 64)          0         
 g2D)                                                            
                                                                 
 conv2d_2 (Conv2D)           (None, 8, 8, 128)         73856     
                                                                 
 max_pooling2d_2 (MaxPoolin  (None, 4, 4, 128)         0         
 g2D)                                                            
                                                                 
 conv2d_3 (Conv2D)           (None, 4, 4, 256)         295168    
                                                                 
 max_pooling2d_3 (MaxPoolin  (None, 2, 2, 256)         0         
 g2D)                                                            
                                                                 
 flatten (Flatten)           (None, 1024)              0         
                                                                 
 dropout (Dropout)           (None, 1024)              0         
                                                                 
 dense (Dense)               (None, 512)               524800    
                                                                 
 dense_1 (Dense)             (None, 100)               51300     
                                                                 
=================================================================
Total params: 964516 (3.68 MB)
Trainable params: 964516 (3.68 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
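
The parameter counts in the summary follow directly from the layer shapes: a Conv2D layer has (kernel height \(\times\) kernel width \(\times\) input channels + 1 bias) \(\times\) filters weights, and a Dense layer has (inputs + 1) \(\times\) units. A quick check (Python) against the summary above:

# Conv2D: (kh * kw * in_channels + 1) * filters
print((3 * 3 * 3 + 1) * 32)     # conv2d:   896
print((3 * 3 * 32 + 1) * 64)    # conv2d_1: 18496
print((3 * 3 * 64 + 1) * 128)   # conv2d_2: 73856
print((3 * 3 * 128 + 1) * 256)  # conv2d_3: 295168
# Dense: (inputs + 1) * units
print((1024 + 1) * 512)         # dense:    524800
print((512 + 1) * 100)          # dense_1:  51300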

Plot the model:

tf.keras.utils.plot_model(
    model,
    to_file = "model.png",
    show_shapes = True,
    show_dtype = False,
    show_layer_names = True,
    rankdir = "TB",
    expand_nested = False,
    dpi = 96,
    layer_range = None,
    show_layer_activations = False,
)
You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) for plot_model to work.

model <- keras_model_sequential()  %>% 
  layer_conv_2d(
    filters = 32, 
    kernel_size = c(3, 3),
    padding = "same", 
    activation = "relu",
    input_shape = c(32, 32, 3)
    ) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(
    filters = 64, 
    kernel_size = c(3, 3),
    padding = "same", 
    activation = "relu"
    ) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(
    filters = 128, 
    kernel_size = c(3, 3),
    padding = "same", 
    activation = "relu"
    ) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(
    filters = 256, 
    kernel_size = c(3, 3),
    padding = "same", 
    activation = "relu"
    ) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dropout(rate = 0.5) %>%
  layer_dense(units = 512, activation = "relu") %>%
  layer_dense(units = 100, activation = "softmax")
  
summary(model)
Model: "sequential_1"
________________________________________________________________________________
 Layer (type)                       Output Shape                    Param #     
================================================================================
 conv2d_7 (Conv2D)                  (None, 32, 32, 32)              896         
 max_pooling2d_7 (MaxPooling2D)     (None, 16, 16, 32)              0           
 conv2d_6 (Conv2D)                  (None, 16, 16, 64)              18496       
 max_pooling2d_6 (MaxPooling2D)     (None, 8, 8, 64)                0           
 conv2d_5 (Conv2D)                  (None, 8, 8, 128)               73856       
 max_pooling2d_5 (MaxPooling2D)     (None, 4, 4, 128)               0           
 conv2d_4 (Conv2D)                  (None, 4, 4, 256)               295168      
 max_pooling2d_4 (MaxPooling2D)     (None, 2, 2, 256)               0           
 flatten_1 (Flatten)                (None, 1024)                    0           
 dropout_1 (Dropout)                (None, 1024)                    0           
 dense_3 (Dense)                    (None, 512)                     524800      
 dense_2 (Dense)                    (None, 100)                     51300       
================================================================================
Total params: 964516 (3.68 MB)
Trainable params: 964516 (3.68 MB)
Non-trainable params: 0 (0.00 Byte)
________________________________________________________________________________

Compile the model with appropriate loss function, optimizer, and metrics:

model.compile(
  loss = "categorical_crossentropy",
  optimizer = "rmsprop",
  metrics = ["accuracy"]
)
model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = optimizer_rmsprop(),
  metrics = c('accuracy')
)
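
Because \(y\) was one-hot encoded above, categorical_crossentropy is the matching loss. Had we kept the integer labels instead, sparse_categorical_crossentropy would play the same role; a sketch (Python):

# Equivalent compile call if y_train / y_test were left as integer labels
model.compile(
  loss = "sparse_categorical_crossentropy",
  optimizer = "rmsprop",
  metrics = ["accuracy"]
)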

4 Training and validation

We use an 80%/20% split for the train/validation sets. On my laptop, each epoch takes about half a minute.

batch_size = 128
epochs = 30

history = model.fit(
  x_train,
  y_train,
  batch_size = batch_size,
  epochs = epochs,
  validation_split = 0.2
)

Plot training history:

hist = pd.DataFrame(history.history)
hist['epoch'] = np.arange(1, epochs + 1)
hist = hist.melt(
  id_vars = ['epoch'],
  value_vars = ['loss', 'accuracy', 'val_loss', 'val_accuracy'],
  var_name = 'type',
  value_name = 'value'
)
hist['split'] = np.where(['val' in s for s in hist['type']], 'validation', 'train')
hist['metric'] = np.where(['loss' in s for s in hist['type']], 'loss', 'accuracy')

# Accuracy trace plot
plt.figure()
sns.relplot(
  data = hist[hist['metric'] == 'accuracy'],
  kind = 'scatter',
  x = 'epoch',
  y = 'value',
  hue = 'split'
).set(
  xlabel = 'Epoch',
  ylabel = 'Accuracy'
);
plt.show()

# Loss trace plot
plt.figure()
sns.relplot(
  data = hist[hist['metric'] == 'loss'],
  kind = 'scatter',
  x = 'epoch',
  y = 'value',
  hue = 'split'
).set(
  xlabel = 'Epoch',
  ylabel = 'Loss'
);
plt.show()

system.time({
history <- model %>% fit(
  x_train, y_train, 
  epochs = 30, batch_size = 128, 
  validation_split = 0.2
)
})
Epoch 1/30
313/313 - 15s - loss: 4.2474 - accuracy: 0.0483 - val_loss: 3.9335 - val_accuracy: 0.0956 - 15s/epoch - 48ms/step
Epoch 2/30
313/313 - 15s - loss: 3.6830 - accuracy: 0.1335 - val_loss: 3.4146 - val_accuracy: 0.1737 - 15s/epoch - 49ms/step
Epoch 3/30
313/313 - 16s - loss: 3.3142 - accuracy: 0.1978 - val_loss: 3.1397 - val_accuracy: 0.2249 - 16s/epoch - 51ms/step
Epoch 4/30
313/313 - 15s - loss: 3.0680 - accuracy: 0.2431 - val_loss: 2.9872 - val_accuracy: 0.2644 - 15s/epoch - 47ms/step
Epoch 5/30
313/313 - 15s - loss: 2.8514 - accuracy: 0.2851 - val_loss: 2.8056 - val_accuracy: 0.2953 - 15s/epoch - 48ms/step
Epoch 6/30
313/313 - 14s - loss: 2.6705 - accuracy: 0.3239 - val_loss: 2.7639 - val_accuracy: 0.3108 - 14s/epoch - 44ms/step
Epoch 7/30
313/313 - 14s - loss: 2.5165 - accuracy: 0.3535 - val_loss: 2.7509 - val_accuracy: 0.3205 - 14s/epoch - 45ms/step
Epoch 8/30
313/313 - 14s - loss: 2.3630 - accuracy: 0.3843 - val_loss: 2.5944 - val_accuracy: 0.3461 - 14s/epoch - 45ms/step
Epoch 9/30
313/313 - 14s - loss: 2.2423 - accuracy: 0.4092 - val_loss: 2.4330 - val_accuracy: 0.3760 - 14s/epoch - 45ms/step
Epoch 10/30
313/313 - 15s - loss: 2.1083 - accuracy: 0.4392 - val_loss: 2.6660 - val_accuracy: 0.3471 - 15s/epoch - 48ms/step
Epoch 11/30
313/313 - 14s - loss: 2.0031 - accuracy: 0.4625 - val_loss: 2.5329 - val_accuracy: 0.3585 - 14s/epoch - 46ms/step
Epoch 12/30
313/313 - 14s - loss: 1.8957 - accuracy: 0.4868 - val_loss: 2.4175 - val_accuracy: 0.3985 - 14s/epoch - 45ms/step
Epoch 13/30
313/313 - 14s - loss: 1.8051 - accuracy: 0.5041 - val_loss: 2.3713 - val_accuracy: 0.4029 - 14s/epoch - 45ms/step
Epoch 14/30
313/313 - 14s - loss: 1.7110 - accuracy: 0.5264 - val_loss: 2.3639 - val_accuracy: 0.4097 - 14s/epoch - 45ms/step
Epoch 15/30
313/313 - 14s - loss: 1.6266 - accuracy: 0.5466 - val_loss: 2.2787 - val_accuracy: 0.4204 - 14s/epoch - 46ms/step
Epoch 16/30
313/313 - 14s - loss: 1.5437 - accuracy: 0.5655 - val_loss: 2.3067 - val_accuracy: 0.4203 - 14s/epoch - 45ms/step
Epoch 17/30
313/313 - 14s - loss: 1.4614 - accuracy: 0.5835 - val_loss: 2.2478 - val_accuracy: 0.4348 - 14s/epoch - 45ms/step
Epoch 18/30
313/313 - 14s - loss: 1.3980 - accuracy: 0.5979 - val_loss: 2.2992 - val_accuracy: 0.4282 - 14s/epoch - 45ms/step
Epoch 19/30
313/313 - 14s - loss: 1.3303 - accuracy: 0.6163 - val_loss: 2.3866 - val_accuracy: 0.4291 - 14s/epoch - 46ms/step
Epoch 20/30
313/313 - 14s - loss: 1.2658 - accuracy: 0.6337 - val_loss: 2.3838 - val_accuracy: 0.4342 - 14s/epoch - 45ms/step
Epoch 21/30
313/313 - 14s - loss: 1.2076 - accuracy: 0.6444 - val_loss: 2.4009 - val_accuracy: 0.4293 - 14s/epoch - 46ms/step
Epoch 22/30
313/313 - 14s - loss: 1.1592 - accuracy: 0.6587 - val_loss: 2.4491 - val_accuracy: 0.4260 - 14s/epoch - 46ms/step
Epoch 23/30
313/313 - 14s - loss: 1.1104 - accuracy: 0.6694 - val_loss: 2.5286 - val_accuracy: 0.4327 - 14s/epoch - 45ms/step
Epoch 24/30
313/313 - 14s - loss: 1.0699 - accuracy: 0.6810 - val_loss: 2.4114 - val_accuracy: 0.4326 - 14s/epoch - 45ms/step
Epoch 25/30
313/313 - 14s - loss: 1.0232 - accuracy: 0.6910 - val_loss: 2.5181 - val_accuracy: 0.4358 - 14s/epoch - 45ms/step
Epoch 26/30
313/313 - 14s - loss: 0.9830 - accuracy: 0.7037 - val_loss: 2.4894 - val_accuracy: 0.4366 - 14s/epoch - 45ms/step
Epoch 27/30
313/313 - 14s - loss: 0.9447 - accuracy: 0.7132 - val_loss: 2.5211 - val_accuracy: 0.4250 - 14s/epoch - 45ms/step
Epoch 28/30
313/313 - 14s - loss: 0.9159 - accuracy: 0.7207 - val_loss: 2.4663 - val_accuracy: 0.4372 - 14s/epoch - 45ms/step
Epoch 29/30
313/313 - 14s - loss: 0.8859 - accuracy: 0.7303 - val_loss: 2.5689 - val_accuracy: 0.4357 - 14s/epoch - 45ms/step
Epoch 30/30
313/313 - 14s - loss: 0.8567 - accuracy: 0.7394 - val_loss: 2.6534 - val_accuracy: 0.4308 - 14s/epoch - 45ms/step
    user   system  elapsed 
2667.991  482.129  431.219 
plot(history)
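
The validation loss bottoms out around epoch 17 and then creeps upward while the training loss keeps falling, a sign of overfitting. One common remedy is early stopping; a sketch (Python) using Keras's built-in callback:

# Stop once val_loss has not improved for 5 epochs; keep the best weights
early_stop = keras.callbacks.EarlyStopping(
  monitor = "val_loss",
  patience = 5,
  restore_best_weights = True
)
history = model.fit(
  x_train, y_train,
  batch_size = batch_size,
  epochs = epochs,
  validation_split = 0.2,
  callbacks = [early_stop]
)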

5 Testing

Evaluate model performance on the test data:

score = model.evaluate(x_test, y_test, verbose = 0)
print("Test loss:", score[0])
Test loss: 2.5566861629486084
print("Test accuracy:", score[1])
Test accuracy: 0.43939998745918274

model %>% evaluate(x_test, y_test)
313/313 - 2s - loss: 2.5952 - accuracy: 0.4347 - 2s/epoch - 6ms/step
    loss accuracy 
2.595201 0.434700 
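
With 100 classes, top-5 accuracy is a common complementary metric; a sketch (Python) using Keras's built-in metric on the one-hot test labels:

# Fraction of test images whose true class is among the 5 most probable predictions
top5 = keras.metrics.TopKCategoricalAccuracy(k = 5)
top5.update_state(y_test, model.predict(x_test, verbose = 0))
print(top5.result().numpy())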

Generate predictions on new data:

model %>% predict(x_test) %>% k_argmax()
313/313 - 2s - 2s/epoch - 6ms/step
tf.Tensor([30 33 55 ... 51 42 92], shape=(10000), dtype=int64)
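
The Python analogue, assuming the model and arrays from the Python session above, recovers predicted class labels with np.argmax:

# Predicted class probabilities -> integer labels 0..99
pred_prob = model.predict(x_test, verbose = 0)   # shape (10000, 100)
pred_class = np.argmax(pred_prob, axis = 1)
print(pred_class[:5])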